
Robust Few-Shot Learning with Adversarially Queried Meta-Learners

Goldblum, Micah, Fowl, Liam, Goldstein, Tom

arXiv.org Machine Learning

On the other hand, few-shot learning methods are highly vulnerable to adversarial examples. The goal of our work is to produce networks which both perform well at few-shot tasks and are simultaneously robust to adversarial examples. We adapt adversarial training for meta-learning, we adapt robust architectural features to small networks for meta-learning, we test pre-processing defenses as an alternative to adversarial training for meta-learning, and we investigate the advantages of robust meta-learning over robust transfer-learning for few-shot tasks. This work provides a thorough analysis of adversarially robust methods in the context of meta-learning, and we lay the foundation for future work on defenses for few-shot tasks. Conventional adversarial training and pre-processing defenses aim to produce networks that resist attack (Madry et al., 2017; Zhang et al., 2019; Samangouei et al., 2018), but such defenses rely heavily on the availability of large training datasets. In applications that require few-shot learning, such as face recognition from few images, recognition of a video source from a single clip, or recognition of a new object from few example photos, the conventional robust training pipeline breaks down.
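The attack underlying this line of work is projected gradient descent (PGD): repeatedly step the input in the direction that increases the loss, projecting back into an L-infinity ball around the original point. The sketch below, a hypothetical illustration using a two-parameter logistic model rather than the paper's networks, shows the core loop; `eps`, `alpha`, and `steps` are illustrative values, not the paper's settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Binary cross-entropy for the linear model p = sigmoid(w . x).
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def input_grad(w, x, y):
    # Gradient of the loss with respect to the INPUT x: (p - y) * w.
    p = sigmoid(np.dot(w, x))
    return (p - y) * w

def pgd_attack(w, x, y, eps=0.3, alpha=0.05, steps=20):
    # L-infinity PGD: ascend the loss via signed gradient steps,
    # clipping back into the eps-ball around the clean input x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In the meta-learning setting described above, such an attack would be applied to the *query* examples of each few-shot task during meta-training, so the outer-loop loss is computed on perturbed queries rather than clean ones.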


Adversarially Robust Distillation

Goldblum, Micah, Fowl, Liam, Feizi, Soheil, Goldstein, Tom

arXiv.org Machine Learning

Knowledge distillation is effective for producing small high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks. We first study how robustness transfers from robust teacher to student network during knowledge distillation. We find that a large amount of robustness may be inherited by the student even when distilled on only clean images. Second, we introduce Adversarially Robust Distillation (ARD) for distilling robustness onto small student networks. ARD is an analogue of adversarial training but for distillation. In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student. In our experiments, we find that ARD student models decisively outperform adversarially trained networks of identical architecture on robust accuracy. Finally, we adapt recent fast adversarial training methods to ARD for accelerated robust distillation.
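A distillation objective of the kind the abstract describes can be sketched as a temperature-softened KL term between the student's output on an adversarial input and the teacher's output on the clean input, mixed with a hard-label cross-entropy term. This is a hypothetical sketch, not the paper's exact loss; the `alpha` and `temperature` values are illustrative placeholders.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = np.asarray(z, dtype=float) / t
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl_div(p, q):
    # KL(p || q) between discrete distributions.
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def ard_loss(student_adv_logits, teacher_clean_logits, y,
             alpha=0.9, temperature=4.0):
    # Sketch of an ARD-style objective: distill the robust teacher's
    # clean-input predictions into the student evaluated on an
    # adversarially perturbed input, plus a hard-label term.
    t = temperature
    p_teacher = softmax(teacher_clean_logits, t)
    p_student = softmax(student_adv_logits, t)
    distill = (t ** 2) * kl_div(p_teacher, p_student)   # soft-label term
    ce = -np.log(softmax(student_adv_logits)[y] + 1e-12)  # hard-label term
    return alpha * distill + (1 - alpha) * ce
```

The `t ** 2` factor is the standard distillation rescaling that keeps the soft-label gradients comparable in magnitude to the cross-entropy term as the temperature changes.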